- Search Results
- Search for: All records
- Total Resources: 2
- Author / Contributor
- Satici, Aykut C (2)
- Bhounsule, Pranav A (1)
- Dagher, Christopher (1)
- Poonawala, Hasan A (1)
- Sanchez, Sebastian (1)
- Silva, Chandika (1)
- Sirichotiyakul, Wankun (1)
Learning policies for contact-rich manipulation is a challenging problem: the presence of multiple contact modes with different dynamics complicates the exploration of states and actions. Contact-rich motion planning uses simplified dynamics to reduce the dimension of the search space, but the resulting plans are then difficult to execute under the true object-manipulator dynamics. This paper presents an algorithm for learning controllers based on guided policy search, in which motion plans computed under the simplified dynamics define the rewards and sampling distributions for policy-gradient learning. We demonstrate that our guided policy search method improves the ability to learn manipulation controllers, through a task involving pushing a box over a step.
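The guidance scheme described in the abstract above can be sketched in code; this is a minimal illustration assuming a quadratic plan-tracking reward and Gaussian exploration centered on the planned actions. The function names, weights, and `sigma` are illustrative choices, not details from the paper.

```python
import numpy as np

def plan_guided_reward(state, action, plan_state, plan_action,
                       w_state=1.0, w_action=0.1):
    """Reward that penalizes deviation from a motion plan computed
    under simplified dynamics (weights are illustrative)."""
    return -(w_state * np.sum((state - plan_state) ** 2)
             + w_action * np.sum((action - plan_action) ** 2))

def guided_sampling(plan_action, sigma=0.05, rng=None):
    """Sample exploratory actions from a Gaussian centered on the
    plan's action, so rollouts stay near the planned contact mode."""
    rng = rng or np.random.default_rng()
    return plan_action + sigma * rng.standard_normal(plan_action.shape)
```

A policy-gradient learner would then collect rollouts with `guided_sampling` and score them with `plan_guided_reward`, biasing exploration toward the plan while still permitting deviation where the true dynamics demand it.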
Sirichotiyakul, Wankun; Satici, Aykut C; Sanchez, Sebastian; Bhounsule, Pranav A (ASME International Design Engineering Technical Conferences) In this work, we discuss the modeling, control, and implementation of a rimless wheel with a torso. We derive and compare two control methodologies: a discrete-time (DT) controller that updates the controls once per step, and a continuous-time (CT) controller that updates gains continuously. For the discrete controller, we use a least-squares estimation method to approximate the Poincaré map on a chosen section and use a discrete linear-quadratic regulator (DLQR) to stabilize a (closed-form) linearization of this map. For the continuous controller, we introduce moving Poincaré sections and stabilize the transverse dynamics along these moving sections. For both controllers, we estimate the region of attraction of the closed-loop system using sum-of-squares methods. Analysis of the impact map yields a refinement of the controller that stabilizes a steady-state walking gait with minimal energy loss. We present both simulation and experimental results that support the validity of the proposed approaches. We find that the CT controller has a larger region of attraction and smoother stabilization than the DT controller.
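The discrete-time pipeline in this abstract, a least-squares approximation of the Poincaré map followed by DLQR stabilization of its linearization, can be sketched as follows. This is a hedged illustration using generic linear regression and Riccati iteration; the data layout and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_poincare_linearization(X, U, X_next):
    """Least-squares fit of x+ ≈ A x + B u from sampled section crossings.
    X: (N, n) states on the section, U: (N, m) controls, X_next: (N, n)."""
    Z = np.hstack([X, U])                          # regressor matrix (N, n+m)
    AB, *_ = np.linalg.lstsq(Z, X_next, rcond=None)
    n = X.shape[1]
    return AB[:n].T, AB[n:].T                      # A (n, n), B (n, m)

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR via Riccati iteration; returns gain K for u = -K x."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K
```

Given crossings of the chosen Poincaré section collected in simulation or on hardware, `fit_poincare_linearization` estimates the step-to-step linear model and `dlqr` yields the once-per-step feedback gain.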